Experimentation

Loading Required Libraries

import pandas as pd
import numpy as np
import requests
import json
import os
import mlflow
import datetime
import plotly.graph_objects as go
from great_tables import GT


from statsforecast import StatsForecast
from statsforecast.models import (
    HoltWinters,
    CrostonClassic as Croston,
    HistoricAverage,
    DynamicOptimizedTheta,
    SeasonalNaive,
    AutoARIMA,
    AutoETS,
    AutoTBATS,
    MSTL,
)

from mlforecast import MLForecast
from mlforecast.target_transforms import Differences
from mlforecast.utils import PredictionIntervals
from window_ops.expanding import expanding_mean
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor
from sklearn.linear_model import LinearRegression
from utilsforecast.plotting import plot_series
from statistics import mean

import backtesting

Data

Loading metadata:

with open("../settings/settings.json") as raw_json:
    meta_json = json.load(raw_json)

meta_path = meta_json["meta_path"]
data_path = meta_json["data"]["data_path"]
series_mapping_path = meta_json["data"]["series_mapping_path"]
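The code above assumes `settings.json` follows a particular layout. A hypothetical sketch of that structure (the paths and key values here are invented for illustration; only the key names used in this notebook are taken from the code):

```python
# Illustrative stand-in for the parsed settings.json; actual values differ.
meta_json = {
    "meta_path": "../metadata",
    "data": {
        "data_path": "../data/ny_demand.csv",
        "series_mapping_path": "../data/series_mapping.csv",
    },
    "backtesting": {
        "settings": {"h": 24, "n_windows": 10},
        "models": {"model1": {}, "model2": {}, "model3": {}, "model4": {}},
        "leaderboard_path": "../data/leaderboard.csv",
    },
}

# Same accessors as in the notebook.
data_path = meta_json["data"]["data_path"]
series_mapping_path = meta_json["data"]["series_mapping_path"]
```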

Loading the dataset:

df = pd.read_csv(data_path, low_memory=False)
ts = df[["period", "subba", "y"]].copy()
ts["ds"] = pd.to_datetime(ts["period"])
ts = ts[["ds", "subba", "y"]]
ts = ts.rename(columns={"subba": "unique_id"})

GT(ts.head(10))
ds                   unique_id       y
2022-01-01 00:00:00  ZONA       1707.0
2022-01-01 01:00:00  ZONA       1673.0
2022-01-01 02:00:00  ZONA       1644.0
2022-01-01 03:00:00  ZONA       1605.0
2022-01-01 04:00:00  ZONA       1550.0
2022-01-01 05:00:00  ZONA       1487.0
2022-01-01 06:00:00  ZONA       1422.0
2022-01-01 07:00:00  ZONA       1373.0
2022-01-01 08:00:00  ZONA       1336.0
2022-01-01 09:00:00  ZONA       1317.0
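Before modeling, it is worth confirming that each series is a regular hourly sequence with no gaps. A minimal sketch on a toy frame with the same `ds`/`unique_id`/`y` layout as above (the toy values are invented):

```python
import pandas as pd

# Toy frame mimicking the hourly ds/unique_id/y layout.
ts = pd.DataFrame({
    "ds": pd.date_range("2022-01-01", periods=5, freq="h").tolist() * 2,
    "unique_id": ["ZONA"] * 5 + ["ZONB"] * 5,
    "y": range(10),
})

# Count per-series timestamp gaps that are not exactly one hour.
gaps = (
    ts.sort_values(["unique_id", "ds"])
      .groupby("unique_id")["ds"]
      .diff()
      .dropna()
      .ne(pd.Timedelta(hours=1))
      .sum()
)
print(gaps)
```

A nonzero count would flag missing or duplicated hours that should be repaired before backtesting.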
fig = go.Figure()

for i in ts["unique_id"].unique():
    d = ts[ts["unique_id"] == i]
    fig.add_trace(go.Scatter(x=d["ds"],
                             y=d["y"],
                             name=i,
                             mode="lines"))

fig.update_layout(title="The Hourly Demand for Electricity in New York by Independent System Operator")
fig
fig = plot_series(ts,
                  max_ids=len(ts.unique_id.unique()),
                  plot_random=False,
                  max_insample_length=24 * 30,
                  engine="plotly")
fig.update_layout(title="The Hourly Demand for Electricity in New York by Independent System Operator")
fig

Model Settings

Loading the backtesting settings:

bkt_settings = meta_json["backtesting"]["settings"]
models_settings = meta_json["backtesting"]["models"]
leaderboard_path = meta_json["backtesting"]["leaderboard_path"]
models_settings.keys()
dict_keys(['model1', 'model2', 'model3', 'model4'])
bkt = backtesting.backtesting(input=ts,
                              models=models_settings,
                              settings=bkt_settings)
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003608 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 266937, number of used features: 28
[LightGBM] [Info] Start training from score 1563.068468
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003615 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 268257, number of used features: 28
[LightGBM] [Info] Start training from score 1561.958936
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.004826 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 267201, number of used features: 28
[LightGBM] [Info] Start training from score 1562.871599
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.004001 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 268521, number of used features: 28
[LightGBM] [Info] Start training from score 1561.800124
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003496 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 267465, number of used features: 28
[LightGBM] [Info] Start training from score 1562.656823
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.005084 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 268785, number of used features: 28
[LightGBM] [Info] Start training from score 1561.647109
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.004659 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 267729, number of used features: 28
[LightGBM] [Info] Start training from score 1562.342567
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003543 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 269049, number of used features: 28
[LightGBM] [Info] Start training from score 1561.474617
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003566 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 267993, number of used features: 28
[LightGBM] [Info] Start training from score 1562.105268
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.004215 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 269313, number of used features: 28
[LightGBM] [Info] Start training from score 1561.280340
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003711 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 268257, number of used features: 28
[LightGBM] [Info] Start training from score 1561.958936
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003732 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 269577, number of used features: 28
[LightGBM] [Info] Start training from score 1560.995920
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003688 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 268521, number of used features: 28
[LightGBM] [Info] Start training from score 1561.800124
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003844 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 269841, number of used features: 28
[LightGBM] [Info] Start training from score 1560.692612
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.006156 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 268785, number of used features: 28
[LightGBM] [Info] Start training from score 1561.647109
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003580 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 270105, number of used features: 28
[LightGBM] [Info] Start training from score 1560.509502
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003674 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 6438
[LightGBM] [Info] Number of data points in the train set: 269049, number of used features: 28
[LightGBM] [Info] Start training from score 1561.474617
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003675 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
/workspaces/pydata-ny-ga-workshop/experimentation/backtesting.py:35: SettingWithCopyWarning:


A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy

/workspaces/pydata-ny-ga-workshop/experimentation/backtesting.py:36: SettingWithCopyWarning:


A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy

/workspaces/pydata-ny-ga-workshop/experimentation/backtesting.py:37: SettingWithCopyWarning:


A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy

/workspaces/pydata-ny-ga-workshop/experimentation/backtesting.py:38: SettingWithCopyWarning:


A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
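The `SettingWithCopyWarning`s point at lines 35-38 of `backtesting.py`, where new columns are assigned into a filtered slice of a DataFrame. The usual fix is to take an explicit `.copy()` of the slice before mutating it (or to assign with `.loc` on the original frame). A minimal sketch with a toy frame; the column names are illustrative:

```python
import pandas as pd

df = pd.DataFrame({"unique_id": ["ZONA", "ZONA", "ZONB"], "y": [1.0, 2.0, 3.0]})

# Warning-prone pattern:
#   sub = df[df["unique_id"] == "ZONA"]; sub["label"] = "model3"
# Safe pattern: copy the slice explicitly, then mutate the copy.
sub = df[df["unique_id"] == "ZONA"].copy()
sub["label"] = "model3"
```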

bkt.head()
unique_id ds y cutoff forecast lower upper model partition label type
0 ZONA 2024-10-15 03:00:00 1719.0 2024-10-15 02:00:00 1691.388604 1584.323694 1798.453515 LGBMRegressor 1 model3 mlforecast
1 ZONA 2024-10-15 04:00:00 1659.0 2024-10-15 02:00:00 1599.683462 1494.126161 1705.240763 LGBMRegressor 1 model3 mlforecast
2 ZONA 2024-10-15 05:00:00 1603.0 2024-10-15 02:00:00 1510.779826 1371.077126 1650.482526 LGBMRegressor 1 model3 mlforecast
3 ZONA 2024-10-15 06:00:00 1561.0 2024-10-15 02:00:00 1457.364322 1320.554811 1594.173833 LGBMRegressor 1 model3 mlforecast
4 ZONA 2024-10-15 07:00:00 1535.0 2024-10-15 02:00:00 1422.605461 1294.304841 1550.906081 LGBMRegressor 1 model3 mlforecast
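The `y`, `forecast`, `lower`, and `upper` columns are everything needed to reproduce the MAPE, RMSE, and coverage metrics reported in the score table that follows. A hedged sketch on a few rows from the output above, assuming coverage is the share of actuals falling inside the prediction interval:

```python
import numpy as np
import pandas as pd

bkt = pd.DataFrame({
    "y":        [1719.0, 1659.0, 1603.0],
    "forecast": [1691.4, 1599.7, 1510.8],
    "lower":    [1584.3, 1494.1, 1371.1],
    "upper":    [1798.5, 1705.2, 1650.5],
})

# Point-forecast accuracy and interval coverage per partition.
mape = (np.abs(bkt["y"] - bkt["forecast"]) / bkt["y"]).mean()
rmse = np.sqrt(((bkt["y"] - bkt["forecast"]) ** 2).mean())
coverage = bkt["y"].between(bkt["lower"], bkt["upper"]).mean()
```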
score = backtesting.bkt_score(bkt=bkt)
score

GT(score.score.head(10))
unique_id label type partition model mape rmse coverage model_unique_id
ZONA model3 mlforecast 1 LGBMRegressor 0.04795582885112342 89.53476672489819 1.0 model3_LGBMRegressor
ZONA model3 mlforecast 1 XGBRegressor 0.04278764003375896 82.8606539629447 0.9583333333333334 model3_XGBRegressor
ZONA model3 mlforecast 1 LinearRegression 0.04974415483793959 105.17840125753148 1.0 model3_LinearRegression
ZONA model3 mlforecast 2 LGBMRegressor 0.02922564601981533 58.36871858852449 1.0 model3_LGBMRegressor
ZONA model3 mlforecast 2 XGBRegressor 0.02232363446732618 46.13631784076604 1.0 model3_XGBRegressor
ZONA model3 mlforecast 2 LinearRegression 0.04368075185728404 84.96291728505464 0.75 model3_LinearRegression
ZONA model3 mlforecast 3 LGBMRegressor 0.05380042540504409 96.62416939561571 0.8333333333333334 model3_LGBMRegressor
ZONA model3 mlforecast 3 XGBRegressor 0.03645340811540917 73.92130427071172 0.625 model3_XGBRegressor
ZONA model3 mlforecast 3 LinearRegression 0.04599614851972928 99.59861444229688 0.9583333333333334 model3_LinearRegression
ZONA model3 mlforecast 4 LGBMRegressor 0.07647545781963037 137.43285512595006 0.375 model3_LGBMRegressor
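The leaderboard below is this partition-level score table averaged per model and series. A hedged sketch of that aggregation with pandas (toy values; the real computation lives in `backtesting.bkt_score`):

```python
import pandas as pd

score = pd.DataFrame({
    "model_unique_id": ["model3_LGBMRegressor"] * 2 + ["model3_XGBRegressor"] * 2,
    "unique_id": ["ZONA"] * 4,
    "partition": [1, 2, 1, 2],
    "mape": [0.048, 0.029, 0.043, 0.022],
    "rmse": [89.5, 58.4, 82.9, 46.1],
    "coverage": [1.0, 1.0, 0.96, 1.0],
})

# Average each metric across backtesting partitions per model/series pair.
leaderboard = (
    score.groupby(["model_unique_id", "unique_id"], as_index=False)
    .agg(
        partitions=("partition", "count"),
        avg_mape=("mape", "mean"),
        avg_rmse=("rmse", "mean"),
        avg_coverage=("coverage", "mean"),
    )
)
```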
GT(score.leaderboard.head(10))
model_unique_id unique_id label model type partitions avg_mape avg_rmse avg_coverage
model3_LGBMRegressor ZONA model3 LGBMRegressor mlforecast 20 0.04219627578736639 76.94542166889934 0.7791666666666667
model3_XGBRegressor ZONA model3 XGBRegressor mlforecast 20 0.03883685197974624 70.4043095245232 0.7833333333333333
model3_LinearRegression ZONA model3 LinearRegression mlforecast 20 0.04673586095492148 94.21062426226025 0.8145833333333333
model4_LGBMRegressor ZONA model4 LGBMRegressor mlforecast 20 0.03475930785974278 66.78606235279473 0.8166666666666667
model4_XGBRegressor ZONA model4 XGBRegressor mlforecast 20 0.03447788711098195 66.34120726824291 0.7791666666666667
model4_LinearRegression ZONA model4 LinearRegression mlforecast 20 0.031451369703253865 63.70007892927969 0.81875
model3_LGBMRegressor ZONB model3 LGBMRegressor mlforecast 20 0.05805737222384545 68.18348097058845 0.84375
model3_XGBRegressor ZONB model3 XGBRegressor mlforecast 20 0.05686148418486053 66.22832265740578 0.8041666666666667
model3_LinearRegression ZONB model3 LinearRegression mlforecast 20 0.07412797261195767 88.00839130558488 0.8354166666666667
model4_LGBMRegressor ZONB model4 LGBMRegressor mlforecast 20 0.05280180584546797 63.95391261221789 0.83125
GT(score.top)
model_unique_id unique_id label model type partitions avg_mape avg_rmse avg_coverage
model4_LinearRegression ZONA model4 LinearRegression mlforecast 20 0.031451369703253865 63.70007892927969 0.81875
model4_XGBRegressor ZONB model4 XGBRegressor mlforecast 20 0.04990164204689953 61.50748573552928 0.8270833333333334
model4_LinearRegression ZONC model4 LinearRegression mlforecast 20 0.05883017394892269 109.75741267896122 0.8291666666666667
model4_LGBMRegressor ZOND model4 LGBMRegressor mlforecast 20 0.04610411734303354 32.59497249269365 0.8104166666666667
model4_LinearRegression ZONE model4 LinearRegression mlforecast 20 0.0745053032568893 67.76425655956902 0.8125
model4_LinearRegression ZONF model4 LinearRegression mlforecast 20 0.05212821927381432 75.84396938968229 0.8166666666666667
model4_LinearRegression ZONG model4 LinearRegression mlforecast 20 0.048855291350719504 55.6707171450734 0.80625
model4_XGBRegressor ZONH model4 XGBRegressor mlforecast 20 0.05390935709492725 16.710052356332614 0.7854166666666667
model4_XGBRegressor ZONI model4 XGBRegressor mlforecast 20 0.03842874906638785 27.07394224935995 0.7770833333333333
model4_LGBMRegressor ZONJ model4 LGBMRegressor mlforecast 20 0.025910214240455358 163.31549830977897 0.7833333333333333
model4_LinearRegression ZONK model4 LinearRegression mlforecast 20 0.040560537435138336 95.77579962905139 0.7875
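`score.top` keeps one winner per series. A hedged sketch of that selection, assuming the winner is the model with the lowest average error for each `unique_id`:

```python
import pandas as pd

lb = pd.DataFrame({
    "unique_id": ["ZONA", "ZONA", "ZONB", "ZONB"],
    "model_unique_id": ["model3_LGBMRegressor", "model4_LinearRegression",
                        "model3_XGBRegressor", "model4_XGBRegressor"],
    "avg_rmse": [76.9, 63.7, 66.2, 61.5],
})

# One row per series: the model with the lowest avg_rmse.
top = lb.loc[lb.groupby("unique_id")["avg_rmse"].idxmin()].reset_index(drop=True)
```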
score.top.to_csv(leaderboard_path, index=False)